
    Efficient Computer Vision for Embedded Systems

    The winners, as well as the organizers and sponsors of the IEEE Low-Power Computer Vision Challenge, share their insights into making computer vision (CV) more efficient to run on mobile or embedded systems. As CV (and, more generally, artificial intelligence) is deployed widely on the Internet of Things, efficiency will become increasingly important.

    Resource Estimation for Large Scale, Real-Time Image Analysis on Live Video Cameras Worldwide

    Thousands of public cameras live-stream an abundance of data to the Internet every day. If analyzed in real time by computer programs, these cameras could provide unprecedented utility as a global sensory tool. For example, if cameras capture the scene of a fire, a system running image analysis software on their footage in real time could be programmed to react appropriately (perhaps by calling firefighters). No such technology has been deployed at large scale because the sheer computing resources needed have yet to be determined. To help build computer systems powerful enough to achieve such lifesaving feats, we developed a model that estimates the computing resources required for an experiment of that magnitude. The team is creating an experiment to demonstrate the feasibility of analyzing real-time images at a large scale. More specifically, the experiment aims to retrieve and analyze one billion images in 24 hours. A preliminary study suggests that this goal is attainable. This experiment will study the accuracy and performance of state-of-the-art image analysis solutions and reveal directions for future improvement.
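
    As a rough illustration of the kind of estimate such a model produces, the throughput target can be turned into an instance count with simple arithmetic. A minimal Python sketch; the per-image cost and instance size are illustrative assumptions, not figures from the study:

        import math

        # Back-of-envelope estimate of the compute needed to analyze one
        # billion images in 24 hours. The per-image cost and instance size
        # below are assumptions for illustration only.

        TOTAL_IMAGES = 1_000_000_000
        WINDOW_SECONDS = 24 * 60 * 60                     # the 24-hour window

        required_rate = TOTAL_IMAGES / WINDOW_SECONDS     # ~11,574 images/s

        seconds_per_image = 0.05     # assumed: 50 ms of analysis per image
        cores_per_instance = 8       # assumed: cores per cloud instance

        images_per_instance = cores_per_instance / seconds_per_image  # images/s
        instances_needed = math.ceil(required_rate / images_per_instance)

        print(f"Required throughput: {required_rate:,.0f} images/s")
        print(f"Instances needed:    {instances_needed}")

    Under these assumptions the experiment needs roughly 73 instances; the real model would substitute measured per-image analysis times for the assumed 50 ms.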

    Strategy Selection for Product Service Systems Using Case-based Reasoning

    A product service system (PSS) integrates products and services in order to lower environmental impact. It can achieve good eco-efficiency and has received increasing attention in the last decade. This study focuses on strategy selection for product service system design. Case-based reasoning is utilized to provide suggestions for finding an appropriate strategy. To build a case database, successful PSS cases were collected from the literature and websites and formulated; forty-seven cases with complete information were used in this study. Twelve indices under three categories were analyzed and selected to describe the cases. The analytic hierarchy process (AHP) is used to find the relative weights of the factors related to customer selection. These weights are used to calculate similarity in the case-based reasoning process. The successful strategy of the most similar case is extracted and recommended for PSS strategy determination. More than 90% of tested cases obtained an appropriate strategy from the most similar case. Finally, two new products are introduced to find the best strategy for product service system design and development using the proposed case-based reasoning system.
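
    The core of the retrieval step is a weighted similarity between the query and each stored case. A minimal sketch, assuming index scores normalized to [0, 1]; the index names, weights, and cases below are illustrative placeholders (the study uses twelve indices with AHP-derived weights):

        # Weighted-similarity retrieval for case-based reasoning.
        # All names and numbers are hypothetical stand-ins.

        def similarity(query, case, weights):
            """Weighted similarity between a query and a stored case.

            query, case: dicts mapping index name -> score in [0, 1].
            weights: dict mapping index name -> AHP weight (sums to 1).
            """
            return sum(w * (1.0 - abs(query[k] - case[k]))
                       for k, w in weights.items())

        weights = {"customer_contact": 0.40, "product_value": 0.35,
                   "usage_frequency": 0.25}
        query = {"customer_contact": 0.8, "product_value": 0.6,
                 "usage_frequency": 0.3}
        cases = {
            "car_sharing":  {"customer_contact": 0.9, "product_value": 0.7,
                             "usage_frequency": 0.2},
            "tool_leasing": {"customer_contact": 0.3, "product_value": 0.4,
                             "usage_frequency": 0.9},
        }

        # The strategy of the most similar case is the recommendation.
        best = max(cases, key=lambda name: similarity(query, cases[name], weights))
        print("Most similar case:", best)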

    Continuous Analysis of Many Internet Connected Cameras

    There are many Internet-connected cameras all over the world containing a lot of useful information that goes undiscovered. Traffic cameras could monitor the amount of congestion on the highway. Outdoor cameras could monitor weather conditions and help develop more accurate weather models. Currently, there is no common system that brings this camera data together with a way to analyze it. The goal of CAM2 is to create a system that lets users easily access this camera data and perform large-scale analysis on it to extract useful information. The structure of the system includes (i) a website that allows users to interact with the system, (ii) a database of thousands of publicly accessible cameras, (iii) a manager that allocates and manages all the resources needed for analysis, and (iv) cloud computing instances used to execute analysis programs. The system uses the image-processing library OpenCV and an API that allows users to create their own image analysis programs compatible with CAM2. Users can also select from over a dozen provided analysis programs, including motion analysis, object counting, and more. Once users select an analysis program or upload their own, they can choose from approximately 70,000 cameras to analyze. People can register on the CAM2 website, cam2.ecn.purdue.edu, to begin analyzing camera data and extracting information that would otherwise go undiscovered. Future work includes adding more public cameras to the system and adding more features to make the system easier to use and more powerful for users.
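
    For illustration, a frame-differencing motion detector in the spirit of CAM2's provided motion analysis can be written in a few lines of OpenCV. This standalone sketch bypasses the CAM2 API, which normally handles frame retrieval, and the stream URL below is a placeholder, not a real camera:

        import cv2

        # Simple motion analysis by frame differencing. The URL is a
        # placeholder for any OpenCV-compatible camera stream.
        cap = cv2.VideoCapture("http://example.com/camera/stream.mjpg")
        ok, prev = cap.read()
        if not ok:
            raise RuntimeError("could not read from the camera stream")
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            diff = cv2.absdiff(gray, prev_gray)            # per-pixel change
            _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            motion = cv2.countNonZero(mask) / mask.size    # fraction of pixels moving
            print(f"motion level: {motion:.3f}")
            prev_gray = gray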

    Is Real-Time Mobile Content-Based Image Retrieval Feasible?

    Content-based image retrieval (CBIR) is a method of searching through a database of images by using another image, instead of text, as the query. Recent advances in the processing power of smartphones and tablets, collectively known as mobile devices, have prompted researchers to attempt to construct mobile CBIR systems. Most research on mobile CBIR has focused on improving either its accuracy or its run time, but not both simultaneously. We set out to answer the question: is real-time CBIR with manageable accuracy possible on current mobile devices? To find the answer, we ran tests using a compiled database of 930 high-resolution images on both a desktop computer and a Nexus 7 tablet. These tests examined how image resolution, matching method, and image descriptor type affect match time and accuracy. By scaling down the images before matching them, we were able to achieve a run time on Android of less than 10 seconds while maintaining 60% accuracy on average. These results suggest that a mobile CBIR system that sufficiently balances accuracy and run time can be developed with current technology.
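
    A minimal sketch of the scale-then-match idea evaluated here, using OpenCV's ORB descriptors and brute-force Hamming matching; the descriptor choice and the 0.25 scale factor are illustrative assumptions, not the study's exact configuration:

        import cv2

        def match_score(query_path, candidate_path, scale=0.25):
            """Lower score = better match; rank the database by this."""
            orb = cv2.ORB_create()
            bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

            imgs = []
            for path in (query_path, candidate_path):
                img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
                # Shrinking before descriptor extraction trades some
                # accuracy for a large reduction in match time.
                imgs.append(cv2.resize(img, None, fx=scale, fy=scale))

            _, des_q = orb.detectAndCompute(imgs[0], None)
            _, des_c = orb.detectAndCompute(imgs[1], None)
            if des_q is None or des_c is None:
                return float("inf")                # no features found
            matches = bf.match(des_q, des_c)
            return sum(m.distance for m in matches) / max(len(matches), 1)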

    GPU/CPU Performance of Image Processing Tasks for Use in the CAM2 System

    Over the past several years, graphics processing units (GPUs) have increasingly been viewed as the future of image processing engines. Currently, the Continuous Analysis of Many CAMeras (CAM2) project performs its processing on CPUs, which will potentially become more costly as the system scales to serve more users. This study seeks to analyze the performance gains of GPU processing and evaluate the advantage of supporting GPU-accelerated analysis for CAM2 users. The platform for comparing CPU and GPU performance has been the NVIDIA Jetson TK1. The target hardware implementation is an Amazon cloud instance, where the final cost analysis will be performed. It is expected that the GPU will outperform its CPU counterpart in some image processing applications; the degree to which it does is subject to a number of factors. So far, tests have shown the expected speedup (and lack thereof) in basic mathematical operations performed on the GPU, indicative of the expected success of the integration into the CAM2 system.
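
    A minimal sketch of such a CPU-versus-GPU comparison, timing the same Gaussian blur on both devices. It assumes an OpenCV build with CUDA support (as on the Jetson TK1); the image and kernel sizes are illustrative:

        import time
        import cv2
        import numpy as np

        # Synthetic 1080p grayscale frame stands in for camera footage.
        img = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)

        # CPU path.
        t0 = time.perf_counter()
        cpu_out = cv2.GaussianBlur(img, (31, 31), 5)
        cpu_time = time.perf_counter() - t0

        # GPU path (requires a CUDA-enabled OpenCV build).
        gpu_img = cv2.cuda_GpuMat()
        gpu_img.upload(img)                  # host-to-device transfer (untimed)
        blur = cv2.cuda.createGaussianFilter(cv2.CV_8UC1, cv2.CV_8UC1, (31, 31), 5)
        t0 = time.perf_counter()
        gpu_out = blur.apply(gpu_img)
        result = gpu_out.download()          # device-to-host transfer (timed)
        gpu_time = time.perf_counter() - t0

        print(f"CPU: {cpu_time * 1e3:.1f} ms, GPU: {gpu_time * 1e3:.1f} ms")

    Whether the transfer costs are counted is one of the factors that decides whether the GPU wins, which is why the sketch times the download alongside the kernel.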

    Development of a Web Application for Continuous Analysis of Many Cameras (CAM2)

    There are tens of thousands of web cameras located around the world and publicly available on the Internet. The images captured by these cameras contain data relating to our living environment, such as traffic patterns, weather, and crowd movement. Researchers can capture this data by applying image analysis techniques to the video and images from these cameras. However, there is no single, unified repository of all the public cameras on the Internet; this, coupled with the computational demands of image analysis, means there is a need for a tool to help researchers perform large-scale image analysis on many cameras. Continuous Analysis of Many Cameras (CAM2) is a framework that enables researchers to execute image analysis programs on image data from many web cameras at a large scale. Users can choose from a database of more than 70,000 cameras worldwide, and its custom application programming interface (API) enables users to upload their own image analysis programs for the system to execute. This paper details the structure and use of the CAM2 system through its website and discusses updates to the system introduced in the July 2015 Alpha 1.4 release. These changes have improved the stability and usability of the system, making CAM2 a more effective tool for researchers.

    Investigating Dataset Distinctiveness

    Just as a human might struggle to interpret another human's handwriting, a computer vision program might fail when asked to perform one task in two different domains. To be more specific, visualize a self-driving car as a human driver who has only ever driven on clear, sunny days, during daylight hours. This driver – the self-driving car – would inevitably face a significant challenge when asked to drive in violent rain or fog at night, putting the safety of its passengers in danger. An extensive understanding of the data we use to teach computer vision models – such as those that will be driving our cars in the years to come – is absolutely necessary as these complex systems find their way into everyday human life. This study works to develop a comprehensive definition of the style of a dataset – analogous to the quantitative difference between cursive and print lettering – with respect to the image data used in the field of computer vision. We accomplished this by asking a machine learning model to predict which commonly used dataset a particular image belongs to, based on detailed features of the images. If the model performed well when classifying images by their source dataset, that dataset was considered distinct. We then developed a linear relationship between this distinctiveness metric and a model's ability to learn from one dataset and test on another, so as to better understand how a computer vision system will perform in a given context before it is trained.
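
    A minimal sketch of the distinctiveness measurement: train a classifier to predict which dataset an image came from and take its held-out accuracy as the metric. Random vectors stand in for real image features here, so the numbers are illustrative only:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        # Stand-ins for per-image feature vectors from two datasets;
        # real features would come from an image descriptor or network.
        rng = np.random.default_rng(0)
        features_a = rng.normal(0.0, 1.0, (500, 64))   # "dataset A"
        features_b = rng.normal(0.5, 1.0, (500, 64))   # "dataset B"

        X = np.vstack([features_a, features_b])
        y = np.array([0] * 500 + [1] * 500)            # label = source dataset

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

        # High held-out accuracy marks the datasets as distinct.
        distinctiveness = clf.score(X_te, y_te)
        print(f"Distinctiveness: {distinctiveness:.2f}")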

    Camera Placement Meeting Restrictions of Computer Vision

    In the blooming era of smart edge devices, surveillance cameras have been deployed in many locations. Surveillance cameras are most useful when they are spaced out to maximize coverage of an area. However, deciding where to place cameras is an NP-hard problem, and researchers have proposed heuristic solutions. Existing work does not consider a significant restriction of computer vision: in order to track a moving object, the object must occupy enough pixels. The number of pixels depends on many factors (how far away the object is, the camera resolution, the focal length). In this study, we propose a camera placement method that not only identifies effective camera placements in arbitrary spaces, but can also account for different camera types. Our strategy represents spaces as polygons, then uses a greedy algorithm to partition the polygons and determine the cameras' locations to provide the desired coverage. The solution also makes it possible to perform object tracking via overlapping camera placement. Our method is evaluated against complex shapes and real-world museum floor plans, achieving up to 82% coverage and 28% overlap.
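
    A minimal sketch of a greedy placement of this flavor, using shapely and a simplified circular field of view. The floor plan, candidate grid, camera range, and 90% coverage target are illustrative assumptions; real camera models also constrain view angle and pixels on target:

        from shapely.geometry import Point, Polygon

        # An L-shaped floor plan as a polygon (hypothetical dimensions).
        floor = Polygon([(0, 0), (10, 0), (10, 6), (4, 6), (4, 10), (0, 10)])

        # Candidate camera locations on a grid inside the floor.
        candidates = [Point(x + 0.5, y + 0.5)
                      for x in range(10) for y in range(10)
                      if floor.contains(Point(x + 0.5, y + 0.5))]
        cam_range = 3.0                       # assumed usable range

        # Greedily place the camera covering the most uncovered area.
        uncovered, placed = floor, []
        while uncovered.area > 0.1 * floor.area and candidates:
            best = max(candidates,
                       key=lambda p: p.buffer(cam_range)
                                      .intersection(uncovered).area)
            candidates.remove(best)
            placed.append(best)
            uncovered = uncovered.difference(best.buffer(cam_range))

        covered = 100 * (1 - uncovered.area / floor.area)
        print(f"{len(placed)} cameras cover {covered:.0f}% of the floor")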